Understanding and Improving Human Data Relations

Alex Bowyer

Section V: Conclusions & Outlook

Introduction to Section V

The purpose of Section V is to reflect upon the learnings of this research and make them actionable and well-contextualised for future designers, researchers and innovators. Chapter 10 reflects back upon the thesis to produce a set of clear and actionable principles for better Human Data Relations, offers pragmatic and personal reflections on this body of work and the agenda it advocates, and positions the PhD in terms of its potential legacy and limitations.

10 Thesis Conclusion

“Our research should transform, not just inform, society.” —Kingsley Ofosu-Ampong (researcher & lecturer in digital transformation)

This Digital Civics PhD has explored, from a constructivist, individualist, and pragmatist point of view, the nature of the power imbalance between individuals and those who hold data about them. It has built upon bodies of literature in multiple disciplines including information theory, data rights legislation, surveillance capitalism, Personal Information Management, Human-Computer Interaction, Human-Data Interaction, Personal Data Ecosystems and Human-centred Design. In doing so, it identified a research gap around the lived experience of today’s data-centric world, recognising that there was a need to explore the role that personal data should play in people’s lives, and how the current power imbalance over personal data [2.1.2] affects people’s attitudes and capabilities.

Through the Case Studies in Section II it explored this research gap, reaching, in Chapter 6 of Section III, a clear answer to its research question [2.4] in terms of six wants that people have in their direct and indirect data relations; people want:

These six wants provided a basis for envisioning a world where people become empowered digital citizens with mastery of their own personal data.

Through its unique two-track approach of real-world project placements taking place alongside the participatory research, as described in Section IV, this evolving vision for better relations with and through data was tested, refined and scrutinised through a practical, sociotechnical lens, opening up consideration of a further question: how might the world be changed towards this new vision? The result of this hybrid learning was presented as a new research agenda, Human Data Relations (HDR). HDR was defined in Chapter 7 of Section III, culminating in the expression of four clear objectives:

  1. data & information awareness and understanding;
  2. data & information useability;
  3. data ecosystem awareness and understanding; and
  4. data ecosystem negotiability.

Through Section IV, this new research agenda was given flesh by mapping out the understood obstacles to progress when pursuing these objectives, and by providing four strategic trajectories for pursuing societal change towards better HDR:

What remains for this chapter to address is to distil the learnings of what people want from data in their lives (and how their wants might be met) into a set of principles from which future researchers, designers and activists might generate new research projects, social interventions, policies, or other initiatives [10.1], before reflecting critically on the research agenda [10.2] and upon this body of research in terms of its limitations [10.3], personal impact upon the researcher [10.4] and potential legacy [10.5].

10.1 Principles for Generative Action

In line with this researcher’s objectives [1.1.1], this PhD set out to have an impact upon the world with regard to tackling the power imbalance over personal data. Indeed, through publications, methodological contributions and industry adoption [1.2; 1.3] it already has. In order to maximise potential impact, this thesis will conclude not by providing a prescriptive roadmap for HDR, but by sharing a versatile set of principles that can be applied by all who wish to pursue this research agenda.

The principles are framed as principles for generative action, which refers to the idea of producing solutions that themselves stimulate further solutions, in line with the concept of generativity, which in psychology refers to the creation of something novel, valuable and meaningful (McAdams, 1996), and in sociology to a stage of adult development focused on productivity, creativity and contributing to society (Slater, 2003).

In recognition of the fact that the power imbalance over personal data is a sociotechnical problem, these principles are broader than just design insights for influencing UI design or software system specifications; they could also be applied at a societal level to influence funding and policy decisions, social interventions, business strategy, grassroots activism and more. As such, they are a tangible and actionable output of this research, rooted in human-centric philosophy, participatory findings and sociotechnical reality.

10.1.1 Principle 1: Life Information makes Data Relatable

In the pilot study and Case Study One, data cards were used to represent civic data [Figure 3.7]. In Case Study Two and in Hestia.ai’s digipower investigation [See Section IV Introduction], a categorisation of provider-held data was displayed [Figure 3.8]. In my BBC research report (Bowyer, 2020), the use of relatable examples was identified as an important way to help people understand what a piece of data represents. Recalling that to make data meaningful, we must be able to interpret it as information [2.1.1], this can be refined further:

To make data meaningful, it needs to be expressed as information about your life.

Spreadsheets and ‘big data’ sound dry and (to many) dauntingly technical, but once those same datapoints are expressed as ‘facts about your life’, the hurdle of relatability is overcome [4.3.1]. The effectiveness of applying this principle is evident in successful online services like Netflix, Spotify and Strava, and in social media platforms like Facebook: these interfaces show understandable everyday concepts like Friends, Events, Movies and Playlists, not files, records, folders or database rows. They have successfully ‘pushed the technology into the background’, in line with Weiser’s vision (Weiser, 1991) and Rogers’ calm computing. While exploring this idea of representing life concepts further at BBC R&D, I produced Figure 10.1, which shows a near-exhaustive overview of the many different informational concepts in an individual’s life that providers might hold as data:

Figure 10.1: Life Concept Modelling

This diagram shows how most common personal data types handled today can be mapped to more relatable life information [7.1] concepts. These life concepts (exemplified where possible) can make data meaningful to individuals, and can help them find value in their data [5.5.3].
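As a toy illustration of this principle, the mapping might be sketched in Python as follows; the data types and life-concept phrasings here are invented examples for illustration only, not the actual contents of Figure 10.1:

```python
# A hypothetical, partial mapping from provider-held data types to
# relatable life-information phrasings (invented examples)
DATA_TO_LIFE_CONCEPT = {
    "gps_trace": "places you have been",
    "purchase_record": "things you have bought",
    "watch_history_row": "films and shows you have watched",
    "contact_entry": "people you know",
}

def as_life_information(data_type: str) -> str:
    """Express a raw data type as information about the individual's life."""
    return DATA_TO_LIFE_CONCEPT.get(data_type, "unrecognised data about you")
```

The point of the sketch is only that the translation layer is explicit: the system, not the user, carries the burden of turning technical data types into facts about a life.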

10.1.2 Principle 2: Personal Data Needs to be United and Unified

It is clear that better HDR involves recognising the splintered and scattered reality of where our data is [Lemley (2021); 8.2.1] and moving beyond it. To make data useable for individuals, the diaspora must be united. This means that data from different sources must first be united—brought together—and then unified, which means making it into a collection of data about the individual and their life, rather than scattered slices of company data that may have secondary value to the individual. This is a multi-faceted sociotechnical problem of access, interpretation and integration [Li, Forlizzi and Dey (2010); 2.2.3]. Negotiability remains important; we can only unite data that we can access, and only data holders can fully explain it [see 8.3 and 8.4]. Setting that aspect aside, the pragmatic way forward begins with creating a space where data can be held, combined, controlled and owned by the individual - ‘a place for your personal data’ [Jones (2011); 2.2.4]. This can form the seed of their new human-centric personal data ecosystem. This follows Bergman’s subjective classification principle, as mentioned in 2.2.2:

‘All related items should be classified together regardless of technological format’ (Bergman, Beyth-Marom and Nachmias, 2003)

We could add: ‘regardless of where they are held’. This vision is embodied in the Personal Data Stores (PDS) concept [2.3.4]. A PDS can bring together personal data from multiple sources that has never co-existed before, enabling the provision of new capabilities over one’s digital life. The BBC R&D Cornmarket project [See Section IV Introduction] examines how to build PDSs, and in 9.4 I explored possible design approaches; at this stage, only the concept is important. Once data is united and unified, PDSs enable the creation of new views of data that were not previously possible, because code can execute across data that was previously dispersed. For example, today each separate TV app, device or streaming service maintains separate records of what you have watched. Once unified in a PDS, it would be possible to present you with a unified view of all the past content you had viewed, across all channels, as this mock-up I made during my BBC internship shows:

Figure 10.2: Mock-up of a Unified TV Viewing History Interface
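The unite-then-unify step behind such a view can be sketched in a few lines of Python. Everything here is invented for illustration: the provider names, record format and fields are assumptions, not any real export format:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class WatchEvent:
    source: str        # which provider's export the record came from
    title: str
    watched_at: datetime

def unify_history(*provider_histories):
    """Unite viewing records from many providers, then unify them
    into one chronological view of the individual's viewing life."""
    merged = [event for history in provider_histories for event in history]
    return sorted(merged, key=lambda event: event.watched_at)

# Invented exports from two hypothetical services
iplayer = [WatchEvent("iPlayer", "Doctor Who", datetime(2022, 3, 2, 20, 0))]
netflix = [WatchEvent("Netflix", "The Crown", datetime(2022, 3, 1, 21, 15))]
timeline = unify_history(iplayer, netflix)
```

The hard part in practice is not the merge but the normalisation: each provider’s export must first be interpreted into a common record shape, which is exactly the access-and-interpretation problem described above.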

10.1.3 Principle 3: Data must be Transformed into a Versatile Material


INSIGHT 3: Data Must Be Transformed into a Versatile Material
In Case Study Two [Table 5.4; Bowyer, Holt, et al. (2022), supplemental materials], participants expressed diverse goals for personal data, including reflection, pattern-finding, goal-tracking, and creative use. In the PIM space [2.2.2], relevant innovations include associative exploration, spatial arrangement, and embodied interaction for different contexts. Drawing on all of these allows me to infer that unified data must be transformed into a versatile material. Individuals need to be able to use data—represented as facts or assertions about one’s life—by performing manipulations such as:
(continues…)
Treating data as a material will be new to most people apart from data scientists; it is novel not just for end users but for designers too. Eva Deckers, in her work on data-enabled design, an approach to design which also calls for data to become a material, notes (and we could expand this to laypeople too):

“Designers are in general not trained and prepared to work with data. They’re not equipped with the right tools. Data manipulation is not part of the schools’ curriculum and designers are rarely interested in understanding data.” (Deckers, 2018)

(continues…)
Her work with colleagues on the ‘connected baby bottle’ illustrates how treating data as a design material enables a novel, iterative, user-centred process for developing new capabilities (Bogers et al., 2016). In HDR terms, I theorise that what this material should be is human information - life information and ecosystem information [7.2OLD]. Data useability therefore calls for the creation of systems that enable human information to be treated as a material.

10.1.4 Principle 4: Ecosystem Information is an Antidote to Digital Life Complexity

As senior Microsoft official Craig Mundie has said, ‘today, there is simply so much data being collected, in so many ways, that it is practically impossible to give people a meaningful way to keep track of all the information about them that exists out there, much less to consent to its collection in the first place’ (Mundie, 2014).

INSIGHT 4: Ecosystem Information is an Antidote to Digital Life Complexity
Acquiring ecosystem information and understanding is a key motivator for many people—encompassing 74% of participant goals in Case Study Two [Table 5.4]—and is essential for better HDR. This suggests two distinct goals for system builders: ecosystem detection and ecosystem information display, as ingredients for overcoming this obstacle. As a representative example, let us examine a recent app called SubsCrab [Figure 10.3]:

Figure 10.3: SubsCrab: An Example Application for Ecosystem Detection and Visualisation
(continues…)
This app connects to the user’s e-mail account, and searches and monitors it for e-mails from service providers such as Netflix, Spotify, Dropbox, or Google with which the user has monthly or annual subscriptions. In doing so, it is detecting part of the user’s ecosystem: it is identifying which companies they have a payment relationship with. It parses found e-mails to identify billing dates and payment amounts. It then provides additional representations of that ecosystem information to the user, so that they might get on top of their subscriptions, see what they need to pay (or cancel), and feel more ‘in control’ [Teevan (2001); 2.2.2] of this aspect of their digital life. From this example, it is easy to imagine other types of ecosystem detectors, which could detect relationships with free services and websites, and identify account numbers and e-mail addresses, password resets, address book syncs, OAuth logins, family identities and more. Alistair Croll and I explored possibilities for inbox scanning in 2009 (Croll and Bowyer, 2009), and while there has been some innovation in this space, it has largely been for commercial reasons (Braun, 2018). New ecosystem detectors could power new interfaces, contributing to the simplification of the user’s digital life. This would give people more visibility and control over their previously unmanageable data ecosystem.
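A minimal sketch of such an ecosystem detector might look like the following Python. The e-mails, providers and parsing rules are invented for illustration; a production detector such as SubsCrab would need far more robust parsing:

```python
import re

# Invented (sender, subject) pairs, standing in for a scanned inbox
INBOX = [
    ("billing@netflix.com", "Your Netflix payment of £9.99 was received"),
    ("no-reply@spotify.com", "Receipt: Spotify Premium £10.99/month"),
    ("friend@example.org", "Lunch on Friday?"),
]

AMOUNT = re.compile(r"£(\d+\.\d{2})")

def detect_subscriptions(inbox):
    """Detect part of the user's ecosystem: which providers they have a
    payment relationship with, and how much they pay, inferred from
    billing e-mails."""
    found = {}
    for sender, subject in inbox:
        match = AMOUNT.search(subject)
        if match:
            # Crude provider name: the first domain label of the sender
            provider = sender.split("@")[1].split(".")[0]
            found[provider] = float(match.group(1))
    return found
```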
A secondary consideration in achieving the required ‘sea change’ in approaches to HDR is that current PDS and SI approaches are very life-information-centric. It is implicitly assumed that the only way to unite data is to collect it. The difficulty with such an approach is that you can only collect that which you can extract. To address this, I draw inspiration from a computer programming concept known as pass by reference (as opposed to pass by value) (Ananya, 2020), where data is ‘pointed to’ rather than moved. Productivity guru David Allen recommends the use of ‘placeholders’ (Allen, 2015) to keep track of tasks you cannot otherwise bring into your planning. To build a complete map of a user’s ecosystem we must be able to keep track of accounts and data that are remote, much like a search engine points to information on different pages around the web. We can create proxy representations of service-provider-held or otherwise immobile data (e.g. data which is offline or restricted). These representations become part of the manipulable material in the user interface, and could be augmented with links to visit the remote source.
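A rough Python sketch of this proxy idea, with invented names and fields, might look like this:

```python
from dataclasses import dataclass

@dataclass
class RemoteDataReference:
    """A 'pass by reference' placeholder: the data itself stays with the
    provider, but a proxy for it can appear in the user's ecosystem map."""
    provider: str
    description: str
    url: str  # link to visit the remote source

def ecosystem_map(local_items, remote_refs):
    # Local data and remote proxies sit side by side in one unified view
    return ([("local", item) for item in local_items]
            + [("remote", ref.description) for ref in remote_refs])

photos = ["2021-06-01.jpg"]
bank = RemoteDataReference("MyBank", "Transaction history 2015-2022",
                           "https://mybank.example/transactions")
view = ecosystem_map(photos, [bank])
```

The design choice mirrors the search-engine analogy in the text: the map is complete even though not all of the data is held locally.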

10.1.5 Principle 5: We Must Know Data’s Provenance

INSIGHT 5: We Must Know Data’s Provenance
Metadata is what gives information context. Context is critical to sensemaking [2.2.3] and enables good experience-centred design [2.3.2; 2.3.3]. Without context, data loses meaning [5.5.3]. Collecting historical data about the individual is important for reflection [2.2.3] and considered valuable [4.4.3], but knowing the history of a piece of data allows its context to be understood. Data is not neutral; it is inherently biased, since it was created for a specific purpose with a specific agenda in mind (Gitelman, 2013; Neff, 2013). To combat this bias, more context is needed. Significant research in this space has been undertaken by Professors Mike Martin and Rob Wilson at Northumbria University (formerly of Newcastle University), who promote the idea of data with provenance; in other words:

Data must carry with it the details of why it exists, how it came to be, and what has happened to it since its inception.

(continues…)
Provenance should be communicated alongside any visualisation of the data, in order for it to be fairly assessed in context. Provenance is essential for data to be trusted, argues Martin, and should be quite granular: a piece of data should be attributed not just to an individual or organisation, but to the relationship between role-holding individuals in a specific context. Greater insights can be gained when considering all actions upon data as motivated communications from one party to another; only by capturing this information in-situ can the data’s meaning be fully appreciated (Martin, 2022). This framing essentially advances the concept of history tracking [2.2.3] into the sociotechnical, ecosystem-aware problem space. While everyday system designs have not approached this level of granularity, the importance of data provenance has been recognised in the PIM space. Temporal PIM systems [2.2.2] from Lifestreams (Freeman and Gelernter, 1996) to activity streams (Hart-Davidson, Zachry and Spinuzzi, 2012) rely upon data provenance in some form. A study by Jensen et al. concluded that provenance tracking can be valuable for identifying related documents, a critical part of knowledge work today (Jensen et al., 2010). Lindley et al. proposed the idea of file biographies, which view the lifetime of a file as something that should remain connected, so that it can be traversed in order to understand the context of the file at different moments of interaction (Lindley et al., 2018). This comes close to Martin’s vision but does not capture the motivation for each interaction. While provenance capture is not a solution in its own right to the understanding of data and of ecosystems, it is clear that data with provenance is very likely to be a valuable part of any design that aims to provide understanding of complex and invisible personal data ecosystems.
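As a sketch of what such granular provenance might look like in code, consider the following. The record structure, fields and examples are my own invention for illustration, not an implementation of Martin’s model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceEvent:
    actor: str        # who acted on the data
    role: str         # the role they held in this context
    action: str       # what happened to the data
    motivation: str   # why: the 'motivated communication' behind the act
    when: datetime

@dataclass
class Datum:
    """A value that carries the details of why it exists, how it came
    to be, and what has happened to it since its inception."""
    value: object
    provenance: list = field(default_factory=list)

    def record(self, actor, role, action, motivation):
        self.provenance.append(
            ProvenanceEvent(actor, role, action, motivation,
                            datetime.now(timezone.utc)))

weight = Datum(82.4)
weight.record("Dr Smith", "GP", "recorded", "annual health check")
weight.record("clinic-sync", "system", "copied to cloud", "backup policy")
```

Note that each event attributes the action to a role-holding actor and captures the motivation in-situ, which is the part Lindley et al.’s file biographies do not cover.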

10.1.6 Principle 6: Data Holders use Four Levers of Infrastructural Power

Figure 10.4: The Four Levers of Infrastructural Power

INSIGHT 6: Data Holders use Four Levers of Infrastructural Power
Hestia.ai [ARI7.2OLD] have produced a model to explain the mechanisms by which technology companies gain power and use it to shape today’s digital landscape. In this model, infrastructural power comes from three things:
(continues…)
As organisations (especially platforms) collect more data, and grow in market influence or technical capability, they gain power over individuals and over other organisations. They exert this power using four ‘levers’. Simplified and expressed in the terms of this thesis, these are:
  1. Collect & Interpret Data to Acquire Knowledge: Data and signals are collected from individuals and interpreted in order to infer their intents and interests. For example, Google collects raw GPS and wi-fi hotspot data from mobile phones, which it then statistically analyses to infer which shops or venues you visited and what forms of transport you used, increasing Google’s knowledge about individuals and populations.
  2. Present Content and Configure Structures to Influence Individual Behaviour: Knowledge of individual intents and interests is exploited within user interfaces to influence desired individual actions. For example, Facebook presents a user with a product relevant to their interests, which they are motivated to click upon, generating ad revenue. Another example would be Twitter manipulating the content of the user’s feed to show more tweets from conversation topics where it can show promoted tweets, increasing ad revenue.
  3. Configure Structures to Improve Knowledge Acquisition: A provider uses its dominant position to force other organisations to improve the provider’s ability to acquire knowledge. For example, Google provides free analytics tools to web developers, but requires the end users of those client websites to supply visitors’ data back to Google, increasing their ability to acquire knowledge about individuals and populations.
  4. Configure Structures to Disadvantage Others: Certain providers (typically of operating systems or popular devices) can configure the structural relationships between other parties. For example, a smartphone manufacturer could limit data exchange between other apps, while still extensively collecting data signals themselves, such as when Google was found to be collecting call history from Android’s dialer app.
(continues…)
The precise mechanisms and techniques employed when exerting infrastructural power, as well as the social and market consequences of these practices, are explored in detail in Hestia.ai’s digipower technical reports, of which I was a co-author (Bowyer, Pidoux, et al., 2022; Pidoux et al., 2022).
The research highlights that providers’ power is far greater than many realise. Unlike in the physical realm, providers of popular online platforms can reconfigure the landscape to change the way that individuals perceive reality, in line with the powers of interpretative influence, behavioural influence and socially shaped power described above (Bowyer, Pidoux, et al., 2022). Providers control the extent to which (if at all) data stored behind the scenes, and internal processes that use that data, are visible, and how data and processes are represented.
The model shows that the accumulation of data (and hence, information) is implicitly and objectively a form of power, consistent with participants’ observations in 5.5.4. As long as current service providers are free to collect so much personal information, the information landscape is likely to remain imbalanced and individuals will not be able to acquire ecosystem negotiability.
This insight shows that the most powerful data holders exert huge influence over the digital landscape, in terms of what is knowable and what is doable. HDR reformers’ abilities to balance the landscape are hindered by the fact that they are operating in a landscape that the incumbent platform and service providers effectively control.

10.1.7 Principle 7: Human-centred Information Systems must serve Human Values, Relieve Pain and Deliver New Life Capabilities

INSIGHT 7: Human-centred Information Systems must serve Human Values, Relieve Pain and Deliver New Life Capabilities
Through work at BBC R&D exploring how to better connect people with their data, it became clear that there is a way to combat user indifference and apathy. It emerges from the realisation that the way people find value in data is to connect it to their lives. The more that people see relatable life information and can imagine ways to harness that information in their everyday life, the more motivated they will be. BBC R&D conducted research (Forrester, 2021) that identified fourteen specific Human Values that people seek to satisfy in their lives, which are shown in Figure 10.5. These are, at the most abstract, goals that people care about in their daily existence.

Figure 10.5: Human Values, as Identified in BBC R&D Research Funded by Nesta
(continues…)
Given these and the earlier observation that life information is what makes data relatable, the insight I offer here is that the way to make people care about their data is to use it to help them in their life. By starting with a focus on a user’s world, one can then focus in on their life, and then the data that represents elements of that life. Then, the individual has a vested interest. Systems and features should be designed from this life-centric perspective. This is known as value-centred design (Reber and Duffy, 2005) and it has been argued that this should become the guiding design philosophy in HCI (Cockton, 2004). To offer true individual value, all human-centric system designs must also consider context [2.3.2], environment (Abowd, 2012) and experience [3.2.1]. In business modelling, there is a tool called the value proposition canvas, which identifies three ways of conceptualising value: gain creators, pain relievers and jobs-to-be-done. Informed by these concepts, we can design better human-centric functionality that relieves an individual’s pain points, helps them complete their tasks, or offers them some gain over the status quo. In the HDR space, given the lack of existing tools for digital life management, we have the opportunity to create quite a unique type of gain: new capabilities over your digital life that you have never had before. This ability to do new things has been identified as a key ingredient of user empowerment (Meschtscherjakov, Wilfinger and Tscheligi, 2014; Schneider et al., 2018). As 2.1.4 and 2.2.2 showed, a range of novel capabilities are needed for effective PIM.
Here is an example of what this value-centric approach might look like in the HDR space: BBC R&D colleague Jasmine Cox and I imagined focusing on address books and contact lists as a strong, relatable starting point to generate demand for a human-centric interface. This could provide people with new life capabilities while also relieving pains. Many people have address and contact information scattered far and wide, and face a complexity they cannot easily manage when it comes to the automated syncing and sharing of potentially sensitive contact information between devices, apps and providers. Developing human-centric personal information management capabilities to bring that messy situation under control would offer a clear and tangible benefit to users. In Figure 10.6, we show how there could be a strategic path, beginning with detecting ecosystem and life information from the individual’s calendar and e-mail inbox, and building up to more holistic life-level PDS capabilities.

Figure 10.6: A Contact-and-Calendar-centric PDS Approach
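The contact-unification step of such an approach could be sketched as follows; the merge policy, sources and field names are illustrative assumptions rather than a proposed design:

```python
def merge_contacts(*sources):
    """Unify scattered contact lists, keyed on e-mail address; earlier
    sources take precedence, later ones fill in missing fields."""
    merged = {}
    for source in sources:
        for contact in source:
            entry = merged.setdefault(contact["email"].lower(), {})
            for field_name, value in contact.items():
                entry.setdefault(field_name, value)
    return merged

# Invented exports from a phone and a webmail account
phone = [{"email": "ana@example.com", "name": "Ana", "mobile": "07700 900123"}]
webmail = [{"email": "Ana@example.com", "name": "Ana Lopez", "city": "Leeds"}]
address_book = merge_contacts(phone, webmail)
```

Even this toy version surfaces the real design questions: which source wins on conflict, and what counts as the identity key for a person.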
(continues…)
A helpful example is the vacation scenario from my 2011 article (Bowyer, 2011), shown in Figure 10.7. Today, all the information around such a holiday is scattered across multiple systems - emails, online provider bookings, chat logs, cloud-synced photos, web browser bookmarks, smartphone location logs, etc. It is not hard to imagine that a system that was able to bring all related information about that vacation together in one central interface (mock-up in Figure 10.8) could deliver huge value to users and be very compelling. Such context-targeted human-centric offerings have a much greater chance of generating interest and impact than offerings that merely allow you to ‘organise your data’ or some other abstract phrasing not rooted in everyday life.

Figure 10.7: The Scattered Data Relating to a Vacation

Figure 10.8: Mock-up of a Unified Interface for a Vacation

10.1.8 Principle 8: We Need to Teach Computers To Understand Human Information

INSIGHT 8: We Need to Teach Computers To Understand Human Information
In order to move towards standardised ways to store and unify personal data from multiple sources, computer systems must be taught to understand the information within the data, and how it relates to an individual and the world. This moves beyond just capturing data provenance: put simply, computers need to understand human information. They need to move beyond files (Bowyer, 2011) and databases, and begin to perform operations on human informational concepts, and to associate those concepts according to what they mean - i.e. semantically. This is a preliminary step that will enable the building of systems and interfaces that are able to deal in human concepts and represent the elements of everyday life.
We need to store semantic context and semantic associations, i.e. the meaning of things, not just raw bundles of data. This is advocated by the Web’s inventor Tim Berners-Lee in his vision of a Semantic Web (Berners-Lee, Hendler and Lassila, 2001) and by other proponents of networked and semantic PIM systems [2.2.2]. There is a need to develop standard ways to digitally model facts and assertions about users’ lives, so that those disparate pieces of data can be unified, connected, correlated and compared. Some standards are emerging, such as data shapes (‘ShapeRepo: Make your apps interoperable’, 2022). The extraction of meaning from data has a business domain all of its own: sizable industries have built up around Content Analytics and Enterprise Content Management. But to consider the problem at its simplest level, I offer this insight: through the capture of metadata at the point of data recording, and through subsequent programmatic analysis of stored data, as illustrated in Figure 10.9, we can begin to teach computers what the data we store represents.
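A minimal sketch of semantic association, using invented life facts stored as subject-predicate-object triples (in the spirit of the Semantic Web, though far simpler than real RDF tooling):

```python
# Invented life facts as subject-predicate-object triples, so a system
# can associate concepts by meaning rather than by file or table
TRIPLES = [
    ("alex", "attended", "concert-2022-05-14"),
    ("concert-2022-05-14", "took_place_in", "Newcastle"),
    ("photo-0042.jpg", "taken_at", "concert-2022-05-14"),
]

def related(entity, triples):
    """Associative retrieval: everything directly linked to an entity."""
    linked = set()
    for subject, _predicate, obj in triples:
        if subject == entity:
            linked.add(obj)
        elif obj == entity:
            linked.add(subject)
    return linked
```

Because the associations are stored explicitly, a photo, a place and a person can be reached from one another via the event that connects them, which is precisely what file- and table-centric storage cannot offer.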

Systems that store data without any understanding of what it represents in the physical/human world will be less powerful and versatile than those that we can train to understand our world. The emergence of Large Language Models (LLMs) like ChatGPT in late 2022 shows a promising trajectory here, but also carries great societal risk, as such models only understand human language, not mathematics, logic, science, or how to distinguish truth from opinion, all of which will be critical if accurate, reliable and trustworthy human-centric life interfaces are to be developed.

Figure 10.9: Annotating Data with Semantic Context

Machine learning technologies and Artificial Intelligence have pushed machine understanding of human words, images and content to impressive levels in recent years, and such technologies can certainly be helpful. At its core, however, what is described here is something much simpler than AI: it is about automatically labelling datapoints in as many different ways as possible (using a similar principle to lifelogging) so that those datapoints can be associatively retrieved from many different angles, and about providing humans with ways to amend incorrect labels, reclassify data or apply new semantic associations. Such approaches are in their infancy, and have not yet been adopted extensively in commercial settings. Issues of interoperability for PDS systems are being actively explored and developed in the Solid community (Bansal, 2018; Berners-Lee, 2022) in pursuit of a decentralised web (Verborgh, 2017).
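A toy sketch of this automatic multi-angle labelling and associative retrieval, with invented label schemes:

```python
from datetime import datetime

def auto_label(datapoint):
    """Label a datapoint from many angles (lifelogging-style) so it can
    later be retrieved associatively; labels remain amendable by humans."""
    labels = set(datapoint.get("labels", ()))
    when = datapoint["timestamp"]
    labels.add(f"year:{when.year}")
    labels.add("weekend" if when.weekday() >= 5 else "weekday")
    labels.add(f"type:{datapoint['kind']}")
    return {**datapoint, "labels": labels}

def retrieve(points, label):
    # Associative retrieval from any of the labelled 'angles'
    return [p for p in points if label in p["labels"]]

points = [auto_label({"kind": "photo", "timestamp": datetime(2022, 7, 2)}),  # a Saturday
          auto_label({"kind": "email", "timestamp": datetime(2022, 7, 4)})]  # a Monday
```

No machine learning is involved: the labels are cheap deterministic annotations, yet they already allow retrieval by time, by type or by life context.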

10.1.9 Principle 9: Individual GDPR requests can Compel Companies to Change Data Practices


INSIGHT 9: Individual GDPR requests can compel companies to change data practices.
In this inset box, I will explain how one person can apply the discovery-driven activist approach to compel a multi-billion-dollar international data-centric organisation to improve their HDR.
As an avid user for several years of the music streaming service Spotify, I have built up a large library of playlists. I was interested in building an app using my listening data, so made a GDPR request to get a copy of my personal data. When I received that data, I was disappointed to find it was not suitable for programmatic use, because the tracks in my listening history were identified not by unique identifiers such as spotify:track:4cOdK2wGLETKBW3PvgPWqT, from which I could construct clickable song links, but only by freeform text strings. Through a long and complicated saga, explained in detail in ARI9.1, which involved much persistence and sending over 30 e-mails in an eight-month period, I was ultimately successful in getting Spotify to improve the format of their GDPR data returns, not just for me but for all customers who make GDPR requests in future. I had proven that, with persistence, one individual can use their GDPR rights to exert power over a corporation.
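For illustration, constructing a clickable link from a track URI is trivial once proper identifiers are present, which is why their absence mattered; this sketch assumes Spotify’s public open.spotify.com link pattern:

```python
def track_url(uri: str) -> str:
    """Turn a 'spotify:track:<id>' URI into a clickable web link.
    (The open.spotify.com URL pattern is assumed from Spotify's
    publicly shareable link format.)"""
    prefix = "spotify:track:"
    if not uri.startswith(prefix):
        raise ValueError(f"not a track URI: {uri!r}")
    return "https://open.spotify.com/track/" + uri[len(prefix):]

link = track_url("spotify:track:4cOdK2wGLETKBW3PvgPWqT")
```

With only a freeform string such as a track title, no equivalent link can be constructed reliably, since titles are neither unique nor stable.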
A larger scale example of individuals forcing giant corporations to change is seen in the case of Facebook. In the early 2010s, Austrian lawyer Max Schrems began to pressure Facebook to disclose more personal data to their users. He created a tool to enable people to make their own data access requests, which over 40,000 people used. Faced with an overwhelming volume of work and the massive liability of future data access requests, Facebook was forced to launch the self-service Download Your Information (DYI) tool, increasing transparency for all Facebook users worldwide (Solon, 2012). Facebook was forced to increase its transparency further when Paul-Olivier Dehaye (now CEO of Hestia.ai) made a GDPR request (later backed by legal action) to force Facebook to disclose more information about which advertisers Facebook had enabled to target him using the Facebook Custom Audiences feature. Apparently in order to avoid being embarrassed in court, Facebook updated DYI so that every user’s downloaded information includes a list of advertisers who have added them to a Custom Audience (Dehaye, 2017). Dehaye and Schrems both continue to act as HDR reformers and civic hackers following the discovery-driven activism approach, through their organisations Hestia.ai [ARI7.2OLD] and privacy rights organisation noyb.eu (‘none of your business’) (Schrems, 2017) respectively.

10.1.10 Principle 10: Collectives can Compare and Unify their Data and use their Pooled Knowledge to Demand Change.

INSIGHT 10: Collectives can compare and unify their data and use their pooled knowledge to demand change.
This principle concerns design at the sociotechnical level: the leverage it describes comes not from any single tool but from people, data and organisational structures acting together. Increasingly, the Internet that each individual experiences is not the same as that experienced by anyone else. Thanks to recommendations, targeted ads and social media feeds personalised to individual interests, no two people see the same digital reality. This makes it much more difficult for regulators or individuals to hold digital service providers to account than their analogue counterparts. In recent years, many activists have embraced the power of collectives to try to fill this regulatory gap, realising that together they can discover far more than they can alone, and that such collaboration is an opportunity to improve awareness of digital providers’ practices.
An example of this is the WhoTargetsMe project, launched in 2017 (Jeffers and Webb, 2017), whose objective was to monitor political advertising in the UK. Recognising, as larger studies have shown (Bakshy, Messing and Adamic, 2015), that everyone was seeing different advertisements, the project asked each individual to report the adverts they saw on Facebook, so that these could be pooled and compared. Over 50,000 people participated, building up an otherwise unavailable picture of the ways in which different political demographics were being targeted. This is a powerful mechanism available to collectives in this space: the ability to have individuals obtain their own datapoints and then compare them.
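The pooling-and-comparison mechanism can be sketched in a few lines of code. The following is an illustrative sketch only: the data shapes, function names and demographic labels are hypothetical, and do not reflect WhoTargetsMe’s actual implementation.

```python
from collections import Counter, defaultdict

def pool_observations(observations):
    """Pool per-user ad sightings into one shared picture.

    `observations` is a list of (user_group, ad_id) pairs, where
    user_group is a self-reported demographic label. Returns a
    mapping of ad_id to a Counter of the groups that saw it.
    """
    seen_by = defaultdict(Counter)
    for group, ad_id in observations:
        seen_by[ad_id][group] += 1
    return seen_by

def targeting_skew(seen_by, ad_id):
    """Fraction of sightings of `ad_id` attributable to each group,
    making any demographic skew in targeting visible."""
    counts = seen_by[ad_id]
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}
```

The point of the sketch is that each participant contributes only their own datapoints; the skew only becomes visible once many individual contributions are pooled.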
Another example is seen in the Worker Info Exchange (‘Worker info exchange’, 2022), a collective that helps gig economy workers such as Uber drivers and Deliveroo riders to make data requests. Using the pooled data, it conducts investigations to understand algorithmic inequalities and identify unfair treatment of workers by employers. It then helps those workers to fight for better working conditions, much like a traditional trade union, but powered by collectively-sourced data. This has resulted in Uber being taken to court, and some gains being made for drivers (Lomas, 2021; Foucault-Dumas, 2021).
As the aforementioned case of Max Schrems showed [Principle 9], collectives can be particularly powerful when exerting their data access rights en masse, and this can improve HDR and force greater transparency. René Mahieu and Jef Ausloos have published an exhaustive list of collective actions taken using GDPR rights, addressing issues such as discrimination by US colleges, corporate surveillance of climate activists, gaps in data disclosures, and manipulation of users on dating apps (R. Mahieu and Ausloos, 2020). The authors identify that the GDPR provides an architecture of empowerment, and have called for better enforcement and for European authorities to better support collectives in making data access requests together (R. L. P. Mahieu and Ausloos, 2020). Hestia.ai’s digipower investigation [ARI7.2OLD] concluded that data-discovery-driven collectives are a vital step on the road to a more digitally empowered society (Pidoux et al., 2022, p. 70). It is clear that organised collectives exploiting data access rights represent a powerful vector for impactful discovery-driven activism.

10.1.11 Principle 11: Automated Entity Identification could Enhance Machine Understanding and Unburden Life Interface Users.

INSIGHT 11: Automating the Identification of Entities can enhance Machine Understanding and Unburden Life Interface Users.
Having identified the need to assign every piece of a user’s life information to a particular partition (or multiple partitions) of their life, it quickly becomes apparent that this would be too much work for the user to do alone. Systems that use manual categorisation and tagging to classify information work best with a large userbase to share the effort of classification (Golder and Huberman, 2006). As part of the explorations of PDS approaches at BBC R&D, I therefore also examined how this challenge might be addressed (considering also that effort could be a deterrent to adoption [Objective 5 [8.5]]). I identified an approach that could help with this problem: if the entities associated with a piece of data (for example, a person, a place, an event or a topic) can be programmatically identified, then much of the assignment of data to life partitions can be handled automatically. For example, any data associated with your office location is likely to relate to the ‘work’ part of your life, and could be assigned accordingly, reducing the effort for the PDS user. The process of identifying entities within data, known as entity extraction or named entity recognition (NER), is a well-established technique which relies on the trained recognition of proper nouns and keywords combined with statistical analysis of sentence grammar (Marshall, 2019). It is used extensively in text-mining products within the Content Analytics industry, such as those produced by my former employer OpenText (formerly nStein) (‘What is text mining and content analytics?’, 2022). However, in the context of a PDS, I propose that new techniques can be applied, making use of the data touchpoints into different parts of an individual’s life to identify entities relevant to them personally (including, for example, names of friends or private projects that a standard NER solution would not detect).
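A minimal sketch of this personalised entity-spotting idea follows, using simple dictionary matching over the user’s own entity lists rather than a trained NER model. All names, and the data shapes, are hypothetical assumptions for illustration.

```python
import re

def spot_personal_entities(text, personal_entities):
    """Find mentions of the user's own entities in a piece of text.

    `personal_entities` maps an entity name to its type, e.g.
    {"Sam": "person", "Project Falcon": "project"}, sourced from the
    user's own contacts, calendars and locations inside the PDS.
    Longest names are tried first, so multi-word entities win over
    their substrings. Returns a list of (name, type) pairs found.
    """
    found = []
    for name in sorted(personal_entities, key=len, reverse=True):
        if re.search(r"\b" + re.escape(name) + r"\b", text, re.IGNORECASE):
            found.append((name, personal_entities[name]))
    return found
```

A trained NER model would generalise better, but even this naive matching illustrates the key advantage of the PDS context: the system already knows which names matter to this particular person.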
Data is full of references to entities that have personal relevance in your life. Finding these allows meaningful metadata to be attached to each datapoint. Figure 10.10 shows how a large number of entities could be detected from different parts of an individual’s data once it has been imported into a PDS environment:

Figure 10.10: Identifying Entity Associations in Data
(continues…)
This sort of approach could be quite powerful in reducing the effort for life interface users. By scanning the data, the most prevalent entities could be identified, and the user need only assign those entities to different parts of their life, as illustrated in the first two frames of Figure 9.10. Hundreds of data points programmatically associated with each entity could then be assigned to the correct ‘bucket’ or life partition. I was able to prototype this technique successfully to prove the concept [ARI7.1OLD].
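The propagation step just described, in which the user labels a handful of entities and the associated datapoints inherit those labels, could be sketched as follows. Partition names and data shapes here are illustrative assumptions, not a prescribed schema.

```python
def partition_datapoints(datapoints, entity_partition):
    """Propagate the user's entity-to-partition choices to datapoints.

    `datapoints` maps a datapoint id to the set of entity names
    detected in it; `entity_partition` maps an entity name to a life
    partition such as 'work' or 'family'. Datapoints whose entities
    are all unassigned fall into 'unsorted' for the user to correct
    later.
    """
    assigned = {}
    for dp_id, entities in datapoints.items():
        partitions = {entity_partition[e] for e in entities if e in entity_partition}
        assigned[dp_id] = partitions or {"unsorted"}
    return assigned
```

The user does a small amount of work (labelling entities) and the system does the large amount (labelling every datapoint that mentions them).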
While such an approach would not be perfect, and some corrections would need to be made by the user, this is far preferable to the user having to provide all the classifications themselves, and is likely to motivate greater engagement. I have observed, in user experience design and in the study of productivity systems, that users are more motivated to correct errors than to fill in a blank page.
Philosophically, we are moving here towards a learning system: a system that can be told when it is right and when it is wrong, and get better at classifying things correctly, analogous to the way an executive might train an assistant to better anticipate their needs, a sort of digital life assistant (Bowyer, 2018). Bayesian classification techniques could also be used to support this learning (Authors, 2022). This approach is also useful for ecosystem detection, as outlined in Principle 4, since identifying relationships with external entities is a key first step to mapping a user’s ecosystem.
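As a sketch of how Bayesian classification might support such a learning loop, the toy naive Bayes classifier below treats each user confirmation or correction as a training example. It is an illustration of the idea, not a production design, and all names are hypothetical.

```python
from collections import Counter, defaultdict
import math

class PartitionLearner:
    """Toy naive Bayes classifier that learns which life partition a
    datapoint belongs to from the entities it mentions, improving as
    the user confirms or corrects its guesses."""

    def __init__(self):
        self.partition_counts = Counter()          # partition -> no. of examples
        self.entity_counts = defaultdict(Counter)  # partition -> entity -> count

    def correct(self, entities, partition):
        """Record the user's (possibly corrective) label as training data."""
        self.partition_counts[partition] += 1
        for entity in entities:
            self.entity_counts[partition][entity] += 1

    def guess(self, entities):
        """Return the most probable partition, using add-one smoothing."""
        total = sum(self.partition_counts.values())
        vocab = {e for counts in self.entity_counts.values() for e in counts}
        best, best_score = None, float("-inf")
        for partition, n in self.partition_counts.items():
            score = math.log(n / total)  # prior probability of this partition
            denom = sum(self.entity_counts[partition].values()) + len(vocab)
            for entity in entities:
                score += math.log((self.entity_counts[partition][entity] + 1) / denom)
            if score > best_score:
                best, best_score = partition, score
        return best
```

Every correction strengthens the evidence linking entities to partitions, so the assistant’s guesses improve with use, exactly the trained-assistant dynamic described above.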

10.1.12 Principle 12: The ‘Seams’ of Digital Services need to be Identified, Exploited and Protected.

INSIGHT 12: The ‘Seams’ of Digital Services need to be Identified, Exploited and Protected.
Like Principle 10, this principle concerns design at the sociotechnical level, addressing the contested boundary between products and the people who use them. As identified in 8.4.1, product design (be it hardware or software) is political. Designers pass some power to the user through their design, but users should also be able to take some power on their own terms. This is the case made by Cristiano Storni in his 2014 paper on the politics of seams, in which he identified the idea of empowerment-in-use: the idea that people need to be able to appropriate their technologies for uses that the designers may not have foreseen (Storni, 2014). This is blocked by current black-box, limit-what-the-user-can-do thinking. Central to this capability is the concept of seams: those exposed areas which the user is free to change. This concept was proposed by Mark Weiser and developed by Chalmers and others (Weiser, 1994; Weiser and Brown, 1997; Chalmers, MacColl and Bell, 2003). Changes such as closures of APIs or removal of ports [8.4.2] can be seen as the removal of seams. As Storni highlights, the availability of design seams is a critical determiner of user power. Companies gain power and reduce user agency when they remove or restrict activity at seams. It follows that by identifying, exploiting and protecting the seams of digital services and devices, user autonomy and the viability of data-unification efforts can be protected.
An unseen battle for the free flow of information is underway at the seams of today’s digital products. Hackers, civic activists and makers seek to repurpose and exploit the edges of products for their own ends, while digital service providers and platforms try to block such activity. For example:
- A successful tool called Findings allowed people to clip and share their favourite quotes from Kindle books. Amazon blocked and banned the tool, and the company behind it shut down (Owen, 2012; Maldre, 2012).
- Louis Barclay created a tool called Unfollow Everything, which allowed Facebook users to automatically unfollow all friends and pages, in order to give them greater control of their News Feed reading experience and avoid being manipulated into reading more than they want to. He was banned for life from Facebook and threatened with legal action should he ever build any tools that manipulated the Facebook experience (Barclay, 2021).
- Various activist groups have for several years been fighting to give individuals the legal right to repair their own products (Miller, 2021), a right often blocked through planned obsolescence, inaccessible seams or restricted access to parts. The problem has been described as device tenancy: the idea that our relationship with our technology products is more like that of a tenant, where a landlord retains overall control and permits us to perform certain activities (Tufekci, 2019). New laws have been introduced in the EU (Tett, 2022), forcing companies to support individuals in repairing their devices. Apple has subsequently released self-service repair kits, though these themselves force parts to be paired with particular phones, limiting the utility of self-repair (Moore-Colyer, 2022).
- As detailed in my co-authored paper with Louis Goffe and colleagues (Goffe et al., 2021, 2022), web extensions and web augmentation offer a powerful technique for modifying web experiences and repurposing user interfaces. Once a website is loaded into your browser, it is no longer under the control of the remote site; by creating a web extension to run code within your local web browser, that loaded website can be edited, scraped (P., 2021), or otherwise repurposed. This has been used successfully to stop clickbait, dispute fake news, combat addiction, filter explicit words and more. However, in order to re-assert control over these customisations, Google has announced changes to the way Chrome extensions work, which could ‘stifle innovation’ and limit what developers can do within web extensions (Miagkov, Gillula and Cyphers, 2019).
- An example from 2016 shows how seams can be exploited to obtain information and increase transparency: by brute-force querying of a Facebook API, researchers were able to identify a complete list of 282,000 interests on Facebook and establish the relative popularity of each (Havlak and Abelson, 2016).
- A number of HDR reformers, myself included, had identified a new seam for subverting some of Facebook’s control over how its content is consumed [8.4]: accessibility tags, or ARIA tags (Various Authors, 2022). These are specially marked-up tags in HTML web pages used by screenreaders to display or read content in a more accessible way for partially-sighted or blind people. Because they present page content in a standard format (whereas the HTML of most web pages varies widely and often changes), they offer a reliable way to scrape content from the loaded web page within a web extension. In experiments at Open Lab, posts were successfully scraped from friends’ feeds (which Facebook does not make available anywhere except the News Feed) so that they could be consumed separately in a more human-focused user experience. This technique has been used successfully to monitor Facebook ads by NYU’s Ad Observatory (Watzman, 2021), and was used by WhoTargetsMe [Principle 10]. In 2021, Facebook was found to have deliberately obfuscated content within ARIA tags to prevent such investigations, resulting in visually impaired users being unable to differentiate ads from posts, and hearing junk characters read aloud. This adversarial stance against researchers, activists and HDR reformers shows that companies like Facebook will go to extraordinary lengths to assert their dominance and reduce user agency (Faife, 2021).
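In practice such scraping runs as JavaScript inside a web extension, but the idea can be shown in a short sketch; the version below uses Python’s standard library parser for consistency with the other sketches in this chapter. It assumes well-formed paired tags, and the role="article" convention for feed posts is an assumption for illustration, not a documented Facebook API.

```python
from html.parser import HTMLParser

class AriaPostScraper(HTMLParser):
    """Collect the text of elements carrying a given ARIA role.

    Accessibility markup is comparatively stable across redesigns,
    which is what makes it a useful 'seam' for scraping. This sketch
    assumes well-formed, paired tags (void tags such as <br> would
    desynchronise the depth counter)."""

    def __init__(self, role="article"):
        super().__init__()
        self.role = role
        self.depth = 0    # nesting depth inside a matching element
        self.posts = []   # extracted text, one entry per element
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1
        elif dict(attrs).get("role") == self.role:
            self.depth = 1
            self._buf = []

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1
            if self.depth == 0:
                self.posts.append(" ".join(self._buf))

    def handle_data(self, data):
        if self.depth and data.strip():
            self._buf.append(data.strip())
```

Feeding a page’s HTML into the scraper (`scraper.feed(html)`) fills `scraper.posts`, one text entry per matching element; obfuscating the ARIA markup, as Facebook did, is precisely what breaks this seam.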
- One reason why many companies and services have produced apps is that apps are much more locked-down and controllable than the web browser environment; there are fewer seams. However, adopting the same philosophy as using web extensions to modify web-based experiences, and drawing on data-flow auditing technologies like TrackerControl [9.3], researchers at Oxford University have now developed techniques by which mobile apps can be reverse-engineered and modified to change user experiences to better meet users’ needs, offering the promise of a ‘right to fair programs’ (Kollnig, Datta and Kleek, 2021).
These examples make it quite clear that Storni was right: product seams are the place where control can be asserted or regained. They are the setting for an ongoing battle for the freedom and integrity of today’s information landscape, and it is important for HDR reform that this space is specifically targeted. The role of the HDR reformer here is twofold:
1) To surface information injustices, especially the closures of seams.
2) To push or ‘hack’ the seams to gain transparency and re-assert control, including gaining access to otherwise inaccessible data and to acquire new functionality.
In this context, the work of whistleblowers such as Frances Haugen (Horwitz et al., 2021) and Edward Snowden (Macaskill et al., 2013) is particularly important and validated. Whistleblowers can expose internal practices that harm the integrity of the information landscape and are not otherwise visible. In order to hold online platforms to account, the public must be aware of, and able to attribute, any restriction in freedom or information access to the correct source. They need to know when information or functionality is being modified or restricted. These ideas are explored further in (Bowyer, 2017). Seams should be far more prominent in the public consciousness than they are.

10.1.13 Principle 13: It is Possible (and Necessary) to Demonstrate Business Benefits of Transparency and Human-centricity

INSIGHT 13: It is Possible (and Necessary) to Demonstrate Business Benefits of Transparency and Human-centricity
As outlined in 8.5 and in this section, it is essential that work is done to persuade data-holding organisations of the benefits of moving towards the new paradigms outlined in this thesis. The following avenues for possible future research and advocacy toward data holding organisations have been identified:
- Trust & Reputation: In line with the third public relations-like aspect of HDR [7.2] as well as the recommendations in 4.4.4, 4.5.1, 5.6.2 and 6.2.1, displaying a more inclusive, open and supportive attitude to data handling could strengthen the service relationship and increase customer loyalty and trust. Organisations that are seen to have good Human Data Relations are preferred.
- Consent: In the wake of the GDPR, ensuring consent is becoming an increasing concern to organisations, and the risks of legal consequences for mistakes are high. It makes sense that a more dynamic [Bowyer et al. (2018); 4.5.1; 5.6.2; 6.2.2] consent approach that involves individuals [6.2.3] and keeps them in the loop would enable them to speak up much earlier and express consent wishes that might otherwise go undetected.
- Accuracy: The best-placed person to spot errors in data’s accuracy or fairness is the individual about whom the data is concerned. Therefore, increasing their involvement is likely to improve the quality of the data, especially if additional data is contributed or curated by the service user [4.4.3, 6.2.3]
- Liability: In an increasingly litigious society, storage of personal data, especially health or financial data, is a significant liability for businesses, especially if something goes wrong. Investment in human-centred personal ecosystems would outsource the storage of sensitive data to data trusts or PDS providers, reducing liability for the service business. By ensuring that data is accessed only in ways that are centralised outside of the business and remain in the user’s control—such as PDS company digi.me’s Private Sharing model (digi.me, 2019; Bowyer, 2020)—organisations can ensure that they have negligible risk of mishandling customer data.
- Better Customer Targeting: The most radical, but perhaps the most persuasive, business model relating to better HDR is the VRM approach [2.3.4], in which individuals express their own service or product desires explicitly, and vendors respond to them. This turns traditional models inside out and would empower users more, but, because a self-declared interest is inherently more accurate, it might also give businesses greater confidence that their investment in converting those customers to a sale would be worthwhile. It is important to remember that the current drive towards collecting more data, which underpins the platformisation trend, exists in order to improve ad targeting so that businesses can get a better return on their investment. A VRM approach, or any other approach in which the individual contributes improved data to their data self, is aligned with that existing business objective.

Taken together, these principles translate the HDR agenda into concrete, actionable guidance. They operate at every level, from the individual exercising their data rights, through collectives pooling their data, to the sociotechnical seams where power over digital services is contested, and they identify where designers, activists, researchers and businesses can each contribute to redressing the power imbalance over personal data.

10.2 Pragmatic Reflections on Pursuing Better HDR

This section will reflect upon the proposed objectives and approaches towards better HDR laid out in Sections III and IV, from a pragmatic and broad perspective. One of the first things to note is that while Chapter 8 identified many of the clearest obstacles to the HDR objectives, it should by no means be considered exhaustive or complete. It is important to recognise the uphill struggle that faces anyone pursuing an HDR agenda, and that there are many forces working against the HDR reformer, including but not limited to commerce, resistance to change, insufficiently effective regulation, insufficient funding, technical challenges, disinterest by almost all parties, and the normalising effect of the status quo. HDR requires not just a halt to current data-centric trends but an about-turn: a pivot to a completely different model of organisational personal data handling. In the Principles, Insights and Approaches presented in Sections IV and V, through which I attempt to operationalise the HDR agenda, I lay out a direction of travel, highlighting the actions that appear to have the best chances of success, not a complete answer. The HDR agenda is explored both from within the system and from outside it, and it is possible that even with parties pursuing both conformist and activist approaches in parallel, this may not be enough to bring about the desired change. Nonetheless, the cause of better human-data relations is clearly a worthwhile one for society.

The challenges of better Human Data Relations are intrinsically sociotechnical. They cannot be solved by technological design, by law or by social change alone. This was one of the reasons why it was important to lay out a variety of approaches in the recommendations from the Case Studies in Chapters 4 and 5, and in the HDR approaches in Chapter 9. One benefit of this, however, is that practitioners from many different fields, ranging from technologists and designers to journalists, activists and civil servants, should be able to find something in the Principles or Approaches that they can build upon. One of the underlying needs identified in the findings of Case Study Two [5.6.1], and an area where Chapter 9’s approaches hope to influence governments, is that changes in law are required. Like other jurisdictions, both the EU [9.5] and the UK are, at the time of writing in 2023, developing new or changed laws that will affect the use of personal data online; but their interests are not only to protect their citizens but also to help businesses, not to mention any political interests of those in power. As such, we cannot expect that all legal changes will align with the HDR agenda; in fact, in the UK there are serious concerns that new data protection laws set to replace GDPR following the UK’s exit from the EU could ‘seriously weaken data protection rights’ and cause harm to marginalised communities (Cowburn, 2023). Even the existing GDPR laws, which are designed from a more individual-centric perspective, fail to deliver sufficient benefits in practice, as Case Study Two showed [5.5.1]. So, it is important to acknowledge that while regulation is one of the most promising routes to achieving better HDR, it may not happen, and it may not be sufficient even if it does. It would also require a more paternalistic attitude to protecting individuals, which goes against the libertarian trends of the early 2020s political landscape in the West.

ADD NEW SECTION ON PRAGMATIC REFLECTIONS. starting by saying what C8 did and didn’t do in terms of obstacles etc. add some words highlighting that there are very difficult questions, but that by operationalising HDR i show some of what is possible, not an answer to these questions but a direction of travel. talk about viability and the need for these practical endeavours to acknowledge the challenges i identified. notes: Resolving this tension between what is wanted, what is possible (technically) and what barriers are organisationally/legally is interesting. The contribution here is showing what is possible and how to get there. Whilst I also like the optimism about how to leverage law and design to support and help citizens, still need for more criticality about viability of these routes in practice? How do we balance the paternalism with law, and values defined as societally beneficial e.g., transparency, accountability, privacy, the local desire for these to be realised, as shown in your data, and the reality of surveillance capitalism and ‘if you’re not paying, you’re the product’? write more about HDR literacy and whose responsibility it is (ref recursive public, policy, designers - all designers should consider this - etc). I need to more explicitly talk about the burden/impact on the citizen (as Dave said, use the accountant analogy, etc, or environmental impact advise, or mechanic, etc)). Notes: HDR literacy - again, how much of this should be pushed to individuals, e.g. to rely on their legal rights and seek portability/erasure etc? How much is this failure of system/service design where this should be done by default and implemented to prevent putting burden on users in first place?
More notes from examiners:
Usability point - what support for ‘what next’ does HDR provide – once individuals have oversight, what then? Is that enough, or is this also about challenging current business models? Whilst choice is good – it does push debate to what are they choosing in relation to? And do they have capacity, time, energy, means to really choose (like with consent?) Wary of pushing too much back to the citizen to deal with, when these are structural design decisions around service delivery or business model, and regulatory failings (often as a result of lobbying during legislative process).
Section about Feasibility / Practicality? A Challenging Road Ahead / Designing the Future. more specific discussion on the challenges of unification and liabilities. make clearer the relationship between law and desired change, and how to balance needs of different parties.notes: Given the focus on GDPR, how much do you see law as enabling or inhibiting changes in design practice and balancing value conflict? e.g., process transparency around data processing vs use of IP rights to limit AI transparency? or individual oversight (which may be limited in its utility) vs collective oversight or enforcement authority ? . notes: The idea of unifying data is interesting as whilst it could add value from a usability perspective, giving an understanding of data insights (p243)– it prompts questions though around DP realities of linking data, and roles of different controllers/processors around managing their various obligations if it is unified e.g. risks of new identification harms/connections being made that otherwise were partitioned. legal vs reality difference: How do we align what people want with governance and sociotechnical structures that shape what they can have? For example, many of the requirements elicited from fieldwork broadly reflect what we would hope would be best practice from many legal frameworks (GDPR, ePrivacy Directive, Consumer Rights Act, DSA etc), yet power structures around services are such they are not realisable in practice? I am not sure what is actionable for me here in this c1 feedback point. Ask the examiners? highlight the design problem. The future work section should frame some of what i have offered in C6 as a design problem (informed by the insights). notes: What mechanisms are there to support individuals navigating their data in an everyday way and how can design respond to that complex problem? 
HDR seems well placed to contribute here not just defaulting to XAI (explainable AI) type approaches which may have little value for individuals? Notes on how we can apply HDR to design practice: There can be a clear disconnect between what law may frame as being protected vs what is possible in practice… How do we reconcile expectations/wants with reality of what is feasible/aspirational/implementable? What role can designers play – i.e., how can we apply HDR to design practice? This involves considering the different roles of HDR from activist adversarial orientated through to establishing areas where greater translation between design and law is needed to supporting compliance with GDPR. Good point from discussion around inadequacy of what GDPR seeks at times too e.g. data changing all the time – copy is not working – data access should be around person understanding of data… if bundle of files this is static and loses nature of data being interpretable and relational. consideration of e.g. pseudonymous data, CCTV, AI emotion identification, etc and how that relates to PD (I would argue, still needs to be made understandable, however hard). Notes: Attending to importance of specificity of law in defining the problem space for HDR. 2 examples. Personal data (PD) is a term of legal art - if there is not identification/singling out of individuals then it might not be PD, and therefore no rights under GDPR. This occurs with data driven systems where intent is not identification or unclear if PD being processed and therefore scope of protection limited… e.g. emotion sensing AI systems. How does your work deal with that tension in limits of the law, and also of what people may want or feel is their data may not in fact have the legal protections they desire around it? What if they circumvent Consent? I need to write somewhere about ‘alternatives to consent’ within chap 1 (indeed I reference it in TikTok example and SILVER). 
Notes: Lots of discussion around the role of consent (and issues of it being one time; severance model etc), but recall consent is only one of the legal grounds for data processing. What do you do when that is not the legal grounds for processing? How does this impact HDR? There are many others (fulfilling contract; public duty; legitimate interests). Thus, be careful that whilst consent may encourage a more relational dynamic (despite its current flaws), these other grounds could be used, and indeed could be seen as being designed in some ways to minimise input of subjects and to give a legal justification to processing for controllers, without need to engage with subjects.

risk of people having to sacrifice data, no choice. (ref to C5?) individual data being lost in a big data whole, and how that doesn’t absolve the company of responsibility, or the individual of having an interest (maybe use an example like car traffic data… or CCTV.. how could it be accountable / providence?). Notes: Related to this, with the uncertainty, opacity etc around how data is consumed/used in organisations (training data for models etc) - how does this implicate the model of human data relations? how do we have relations when the data may be a mere weighted variable in a larger decision making process? Does this lead to apathy for direct relations and concern around indirect relations and what does HDR offer in dealing with this? Notes on operationalising: Reframing that this is not PAR, due to lack of feedback loop from AR interventions. There is iterative process. Actually your approach can be justified as something novel in its own right due to your placements helping you understand how to operationalise HDR. Need to unpack the lessons around operationalising.

10.3 Limitations and Future Work

ADD NEW SECTION ON LIMITATIONS AND FUTURE WORK. Notes: Need for a Future Work Section which reflects awareness of topics that are pertinent for the PhD but have not been explored and will not be in depth e.g. awareness of value sensitive design for notion of publics; awareness of limitations of individualist nature of DP law/PIMs and recognising need to consider role of HDR in addressing collective/group interests; recognition of relationality of data vs individual framings in law.

add bit about potential of cards - You could have made more of this development too as a novel approach if you wanted to, as a toolkit for engaging empirically around policy/law (as these tools are lacking, despite calls from legal frameworks to support design of more legally informed systems e.g., GPDPR data protection by design and default).

A notable limitation is the absence of a closing loop of community engagement and reflection: the participatory findings of Section II were not fed back to grassroots communities for validation, although work of this kind is happening elsewhere, for example with Hestia.ai. Cross-cutting workshops to validate and build upon these findings, across both the participatory and industrial strands, would be a valuable next step for future researchers. Relatedly, the author's individualist stance should be acknowledged, along with what is lost by not appealing to grassroots communities and closing this loop, a concern central to Costanza-Chock's work on design justice.

The thesis's focus on GDPR is itself limiting, because data is inherently collective: data is given meaning by, for example, how people are grouped and compared against others, and relationships and interactions are arguably data of multiple people at once. The law is not currently good at protecting group data, and GDPR confers very individualistic rights. How, then, should the interests of an individual around their data be balanced against the collective nature of that data? This gap left by the law is an opportunity for design, and for HDR, to play a greater role.

The notion of empowerment pursued here aligns with liberal democratic ideals of individualised human rights and enabling rights, such as data protection and privacy. Yet because data is often social and interpersonal, pertaining to multiple individuals, it is not simply each person's own. Individualised data protection rights are increasingly stretched to resolve collective harms, such as the categorisation and sorting produced by algorithmic systems, which are not currently dealt with well under individual rights framings, owing to the limited scope for collective action. Future work should examine how the 'desire for empowerment' identified in this thesis (a point recognised on p182) maps onto group rights and collective responses to undesirable ways of interacting with data, since supporting empowered digital citizens must ultimately mean citizens in the plural; work on group privacy, such as Taylor et al.'s, offers a starting point.

Further work should also acknowledge the limitations of human-centric futures and consider 'more than human' design: going beyond the individual, human-centric approach to consider groups, the environment, and data within its ecosystem of processing, collection, storage and management and its wider social context, drawing on literature from researchers such as Paul Coulton, Elisa Giaccardi and Ron Wakkary. It should likewise be acknowledged that the feedback loop between the participatory design of Section II and the work of Section IV was not closed; the latter, being derived from industry placements and internships, offers a different kind of value than a further conventional study following on from the participatory work would have.

This thesis deliberately avoided legal terms of art such as 'data subject', 'data processor' and 'data controller', preferring more accessible language for individuals and data holders, but it is worth reflecting on what is lost in doing so: for example, the specific and distinct obligations that GDPR places on controllers as opposed to processors, and the gaps in protection that such terminology highlights, such as domestic data controllers and the position of individuals who may become subject to GDPR responsibilities by running IoT devices in their own homes.

Finally, given the activist-led framing of this work and the nascent nature of this potential community, engagement with these groups could have been pursued more overtly as a movement and not just as research. The author's own contact with and participation in activist movements, especially through Hestia.ai, merits reflexive acknowledgement, as does the process of feedback between the industrial experiences and the development of the HDR framework, which shaped how problems were approached throughout. Future work should also frame some of what Chapter 6 offers, informed by its insights, as a design problem in its own right.

10.4 Personal Reflection

As an experienced software engineer, power user and technology blogger who had considered the loss of digital agency for many years [1.1], my journey into this research space was an unusual one; I arrived with already-formed ideas about the nature of the problem. This was not an ideal match for the traditionally participant-led approach of HCI, where ideas and insights normally arise solely from one's participants. However, the discipline of the Digital Civics programme and the process of publishing peer-reviewed papers taught me to hold those preconceptions in check. Recognising that HDR issues would be unlikely to surface organically, I was able to use careful sensitisation [3.4.1], balanced and open questioning, and neutrally-designed stimuli [3.4.2] in a way that elevated participant experience to be the primary source of data, producing findings and discursive conclusions that are as much the participants' as my own.

Along the way I discovered vital areas of literature and existing work, most notably the foundational work of Weiser, Abowd, Crabtree and others [2.3.1; 2.3.3], the sub-discipline of Human Data Interaction [2.3.2] and the emergent innovation around Personal Data Ecosystems and MyData [2.3.4]. Through these discoveries I consolidated my existing understanding and was able to contextualise my evolving learning within the established research landscape.

As my understanding from the Case Studies coalesced into a clear, cross-validated picture of what people want from data and from data holders [Chapter 6], I gained the confidence to grow and evolve as a researcher, moving from investigatory and theoretical research towards activist exploration of how to deliver these new capabilities in practice, enabled by the models and ideas I had developed. This ultimately led me to recognise that, in this body of work, I had identified a newly emergent design philosophy that deserved to be named, scoped and explored: the research agenda for better Human Data Relations.

I was especially lucky to find peripheral activities, especially with the BBC and Hestia.ai, that fitted so well alongside my research agenda. These activities slotted perfectly into the action research cycle [3.2.2; Figures 3.1 and 3.2] of my thesis, producing a powerful feedback loop where findings from the academic inquiry became immediately applicable to real-world design activities, while experiences of the real-life barriers to pursuit of the HDR goals helped to challenge and evolve the theoretical models (such as shared data interaction) and designs emerging from the Case Studies. The project contexts provided a place where emerging qualitative findings and design ideas could be tested and iteratively improved through attempts to operationalise them, producing exceptionally grounded and actionable learnings.

This dual research-and-practice approach has allowed me to push this thesis further than a traditional HCI study would allow, and underpins its structure: in Section IV I leave behind the traditional researcher-as-observer stance and step forward into taking an active role as an expert in user-centred design (UCD) [3.2.1] and in practical software interface and process design and innovation.

It has been a tremendous privilege to spend six years understanding in great detail the nature of the problems facing our data-centric society, to translate those impacts into tangible needs, and to be able to map out the landscape and possibilities for improving the way we relate to data. Through this research, I have discovered rich evidence to quantify and qualify the losses of agency I had observed, in far greater detail than existing research. The programme has also given me space to experiment with using both GDPR and web-scraping to access data and push boundaries, and to really embrace my role as an HDR activist and adversarial designer [3.2.1; Figure ARI7.1]. It has allowed me to design and prototype new models and views of data and of information which have transformed the way I look at digital information and how we relate to it, in particular:

I hope these models, as well as the other contributions [1.2], can help others to develop their thinking in the same way: to become HDR-literate and to contribute to the movement for HDR reform that the world so desperately needs.

The collaborative opportunities have been significant. Without this PhD, I would never have had the opportunity to discuss and develop models for personal data interaction and improved ecosystem negotiability with experts at the BBC, Hestia.ai and the wider MyData community. Alongside these formal collaborations, I have disseminated ideas through blogs, tweets, workshop papers and lectures, which has helped not only to refine and clarify the ideas but also to stimulate valuable discussions with interested people, yielding feedback that developed the models and my own learning further.

This opportunity has opened doors that have allowed me to pivot my career towards putting these learnings into action, working on important projects [7.2OLD] to explore how data interaction reforms can be realised in practice, and how we can become not just innovators but social data activists. Thanks to this research, I am now a Senior UX Designer with BBC Newcastle, where I will be able to bring the benefits of this research to the general public through the design of services and interfaces that aim to inform, educate and entertain people in their everyday lives. I now know how to begin to have an impact, and how to work on building the better HDR future that my participants and I have imagined. It is the journey of a lifetime, and one that is in many ways just beginning. I hope that my work and this thesis can contribute to a better, more human-centric digital world, and I can't wait to see where this leads.

10.5 Legacy of This Thesis and the Future of Human Data Relations

At the close of this journey, it is worth reiterating the key takeaways of this work, drawing together the insights and contributions recapped across the chapter summaries. In brief, this thesis offers: an understanding of what people want from personal data; methodologies for participatory work around personal data; best practices and design guidelines for systems and processes involving personal data; principles for generative action towards better data relations; and a detailed, actionable research agenda and strategy for empowerment and systemic change. The hybrid method adopted, in which real-world placements ran alongside participatory research, can itself be justified as a novel contribution, since the placements provided the understanding needed to operationalise HDR; together, these strands directly address the research gap identified in [2.2.5].

These insights must also be balanced against one another where they stand in tension. Making data united and unified, for example, can be controversial, because linking data may present new risks of identification that were not there before. Likewise, knowing a datum's provenance may implicate the personal data of others, since data is often interpersonal, collected across multiple people, as in multi-occupancy homes; this nature of data implicating others was recognised as a research gap [2.2.5 – data beyond the individual] and remains only partially resolved here. More fundamentally, data should not be seen as an owned commodity: it is reflective of something real, an aspect of the human it describes, and hence not anyone else's to separate people from without their involvement and consent. Some of these problems expose inadequacies not just of regulation but of law itself, and some may be insoluble, as with a photograph that depicts someone else. There is a risk in PIMS systems that data, once controlled, comes to be seen as a commodity to be used as people want; yet human rights law holds that everyone has inalienable rights in such data, and we must not reach a point where some can afford to keep their privacy while others must transact with theirs. The impacts of this work are nonetheless concrete: the methodology adopted, the informing of industrial research and policy, the key publications and dissemination, and models such as the five-types-of-data model.

This thesis offers a detailed understanding of individual needs around data interaction and data-centric service relationships [Chapter 6], backed by participatory research in both public sector and private sector Case Studies [Chapter 4; Chapter 5], providing a clear answer to the two primary research questions, RQ1 [3.3.1] and RQ2 [3.3.2]: people want visible, understandable and useable data, process transparency, individual oversight capabilities and involvement in decision making.

Furthermore, based on a solid grounding in existing literature, policy and innovation around Data Access, Personal Information Management, Human Data Interaction and Human-centric Innovation [Chapter 2], these needs are synthesised into a clearly-defined new agenda for future research and innovation, Human Data Relations (HDR) [7.2], encompassing four clear objectives [Chapter 8] for improving individual agency and societal power imbalances around data:

  1. data awareness & understanding,
  2. data useability,
  3. data ecosystem awareness & understanding, and
  4. data ecosystem negotiability.

The inclusion of Chapters 7, 8 and 9 took the thesis much further than a traditional HCI PhD, drawing on the author’s experiences with the practical pursuit of better Human Data Relations in four different real-world academic and industrial project settings [7.2OLD]. Through additional insights, designs and implementation strategies [Chapter 9], the thesis offers not just a theoretical frame for this area of research, but clear and actionable insights that could be immediately explored by researchers and innovators - an anthology of reference material, designs and strategies for HDR reform. This practical contribution of the thesis is delivered in four distinct parts:

Through its Case Studies, this thesis has made additional contributions to the fields of Early Help and GDPR Data Access, detailed in [1.2.3] and [1.2.4]. Nine publications, workshops and presentations of the work in this thesis have been delivered [1.3], and this body of research has already contributed value to real-world industrial projects at BBC R&D in the UK, Hestia.ai in Switzerland and their client Sitra in Finland.

Through the grounded and detailed references and examples in Section IV, this work moves beyond conducting research to understand human personal data wants, and sets the scene for a progressive and activist agenda to take action in service of those wants, with the objective of reconfiguring society into one where those human-centric needs are better met. It constitutes a call to arms for future research, innovation and activism in Human Data Relations, combined with a detailed guide to the data economy landscape, what needs to change, and an arsenal of design and implementation strategies through which HDR reformers might fulfil their role as a recursive public [7.6]. Armed with these insights, practitioners of HDR reform can drive us towards a better future: increased agency for individuals, greater data use capabilities, and a more balanced landscape around the use of personal data by service providers across society—in short, a better world for us all.


Bibliography

Abowd, G. D. (2012) What next, ubicomp?: celebrating an intellectual disappearing act, in Proceedings of the 2012 ACM conference on ubiquitous computing. New York, New York, USA: ACM Press, pp. 31–40. doi: http://dx.doi.org/10.1145/2370216.2370222.
Allen, D. (2015) Getting things done: The art of stress-free productivity. Penguin.
Ananya (2020) ‘Java: Pass by value or pass by reference’, Medium. available at: https://medium.com/swlh/java-passing-by-value-or-passing-by-reference-c75e312069ed.
Various Authors (2022) ‘Naive Bayes classifiers’, GeeksForGeeks. available at: https://www.geeksforgeeks.org/naive-bayes-classifiers/.
Bakshy, E., Messing, S. and Adamic, L. A. (2015) ‘Exposure to ideologically diverse news and opinion on Facebook’, Science. American Association for the Advancement of Science, 348, pp. 1130–1132. doi: 10.1126/SCIENCE.AAA1160/SUPPL_FILE/PAPV2.PDF.
Bansal, A. (2018) ‘An introduction to SOLID, Tim Berners-Lee’s new, re-decentralized web’, FreeCodeCamp. available at: https://www.freecodecamp.org/news/an-introduction-to-solid-tim-berners-lees-new-re-decentralized-web-25d6b78c523b/.
Barclay, L. (2021) ‘Facebook banned me for life because I help people use it less’, Slate.
Bergman, O., Beyth-Marom, R. and Nachmias, R. (2003) The user-subjective approach to personal information management systems, Journal of the American Society for Information Science and Technology, 54(9), pp. 872–878. doi: 10.1002/asi.10283.
Berners-Lee, T. (2022) ‘Solid: Sir Tim Berners-Lee’s vision of a vibrant web for all’. Inrupt. available at: https://inrupt.com/solid/.
Berners-Lee, T., Hendler, J. and Lassila, O. (2001) The Semantic Web, Scientific American, 284(5), pp. 34–43. available at: https://jstor.org/stable/10.2307/26059207.
Bogers, S. et al. (2016) ‘Connected baby bottle’, pp. 301–311. doi: 10.1145/2901790.2901855.
Bowyer, A. (2011) Why files need to die. available at: http://radar.oreilly.com/2011/07/why-files-need-to-die.html.
Bowyer, A. (2017) ‘Designing for human autonomy: The next challenge that civic HCI must address’. available at: https://eprints.ncl.ac.uk/273832.
Bowyer, A. (2018) ‘A grand vision for post-capitalist HCI: Digital life assistants’, CHI Workshops 2018. available at: https://eprints.ncl.ac.uk/273826.
Bowyer, A. et al. (2018) Understanding the Family Perspective on the Storage, Sharing and Handling of Family Civic Data, in Conference on human factors in computing systems - proceedings. New York, New York, USA: ACM Press, pp. 1–13. doi: 10.1145/3173574.3173710.
Bowyer, A. (2020) ‘Design research for cornmarket PDS, recommender & associated permissions: Report by alex bowyer (BBC research intern/open lab PhD)’. available at: https://bit.ly/bbc-pds-research-bowyer.
Bowyer, A., Pidoux, J., et al. (2022) Digipower technical reports: Auditing the data economy through personal data access. doi: 10.5281/zenodo.6554177.
Bowyer, A., Holt, J., et al. (2022) ‘Human-GDPR interaction : Practical experiences of accessing personal data’, CHI ’22.
Braun, A. (2018) ‘Which email providers are scanning your emails?’, maketecheasier. available at: https://www.maketecheasier.com/which-email-providers-scanning-emails/ (accessed: 21 August 2022).
Chalmers, M., MacColl, I. and Bell, M. (2003) ‘Seamful design: Showing the seams in wearable computing’, IEE Eurowearable.
Cockton, G. (2004) ‘Value-centred HCI’, NordiCHI.
Cowburn, P. (2023) ‘26 civil society groups call on government to scrap data protection and digital information (DPDI) bill’, Open Rights Group Press Releases. available at: https://www.openrightsgroup.org/press-releases/26-civil-society-groups-call-on-government-to-scrap-data-protection-and-digital-information-dpdi-bill/.
Croll, A. and Bowyer, A. (2009) ‘Why email clients need to change’. available at: https://web.archive.org/web/20120128050842/http://gigaom.com/2009/04/24/why-email-clients-need-to-change/.
Deckers, E. (2018) ‘Data-enabled design’, UXDX. available at: https://uxdx.com/blog/data-enabled-design/.
Dehaye, P.-O. (2017) ‘Facebook forced to disclose more information about its ad targeting’, Medium. available at: https://medium.com/personaldata-io/facebook-forced-to-disclose-more-information-about-its-ad-targeting-7e6c0127722.
digi.me (2019) ‘Digi.me private sharing: See how you can do more with your personal data’. YouTube video. available at: https://www.youtube.com/watch?v=pGcnK_KraXs.
Faife, C. (2021) ‘Facebook rolls out news feed change that blocks watchdogs from gathering data’, The Markup. available at: https://themarkup.org/citizen-browser/2021/09/21/facebook-rolls-out-news-feed-change-that-blocks-watchdogs-from-gathering-data.
Forrester, I. (2021) ‘Talking about human values and design’, BBC Research & Development. available at: https://www.bbc.co.uk/rd/blog/2021-07-talking-about-human-values-and-design.
Foucault-Dumas, C. (2021) ‘“This case forces Uber to do a lot more than ever before”, says data protection expert’, Medium. available at: https://medium.com/personaldata-io/uber-vs-drivers-trial-interview-data-protection-expert-rene-mahieu-55359f8cdd9d.
Freeman, E. and Gelernter, D. (1996) Lifestreams: A Storage Model for Personal Data, SIGMOD Record (ACM Special Interest Group on Management of Data). Association for Computing Machinery (ACM), 25(1), pp. 80–86. doi: 10.1145/381854.381893.
Gitelman, L. (2013) Raw data is an oxymoron. edited by Lisa Gitelman. MIT Press, p. 182. available at: https://mitpress.mit.edu/books/raw-data-oxymoron.
Goffe, L. et al. (2021) ‘Appetite for disruption: Designing human-centred augmentations to an online food ordering platform’, in 34th british HCI conference, pp. 155–167.
Goffe, L. et al. (2022) ‘Web augmentation for well-being: The human-centred design of a takeaway food ordering digital platform’, Interacting with Computers.
Golder, S. A. and Huberman, B. A. (2006) ‘Usage patterns of collaborative tagging systems’, Journal of Information Science, 32, pp. 198–208. doi: 10.1177/0165551506062337.
Hart-Davidson, W., Zachry, M. and Spinuzzi, C. (2012) Activity streams: Building context to coordinate writing activity in collaborative teams, in SIGDOC’12 - proceedings of the 30th ACM international conference on design of communication. New York, New York, USA: ACM Press, pp. 279–287. doi: 10.1145/2379057.2379109.
Havlak, H. and Abelson, B. (2016) ‘The definitive list of what everyone likes on Facebook’, The Verge. available at: https://www.theverge.com/2016/2/1/10872792/facebook-interests-ranked-preferred-audience-size.
Horwitz, J. et al. (2021) ‘The facebook files’, Wall Street Journal. available at: https://www.wsj.com/articles/the-facebook-files-11631713039.
Jeffers, S. and Webb, L. K. (2017) ‘About who targets me’.
Jensen, C. et al. (2010) ‘The life and times of files and information: A study of desktop provenance’.
Jones, W. (2011) The Future of Personal Information Management Part I: Our Information, Always and Forever.
Kollnig, K., Datta, S. and Kleek, M. V. (2021) ‘I want my app that way: Reclaiming sovereignty over personal devices’, CHI Extended Abstracts. ACM. doi: 10.1145/3411763.3451632.
Lemley, M. A. (2021) ‘The splinternet’, Duke Law Journal, pp. 1397–1428. available at: https://perma.cc/92LZ-B8DN.
Li, I., Forlizzi, J. and Dey, A. (2010) Know thyself: Monitoring and reflecting on facets of one’s life, Conference on Human Factors in Computing Systems - Proceedings, pp. 4489–4492. doi: 10.1145/1753846.1754181.
Lindley, S. E. et al. (2018) Exploring new metaphors for a networked world through the file biography, Conference on Human Factors in Computing Systems - Proceedings, 2018-April, pp. 1–12. doi: 10.1145/3173574.3173692.
Lomas, N. (2021) ‘Dutch court rejects Uber drivers’ “robo-firing” charge but tells Ola to explain algo-deductions’, TechCrunch. available at: https://techcrunch.com/2021/03/12/dutch-court-rejects-uber-drivers-robo-firing-charge-but-tells-ola-to-explain-algo-deductions/.
Macaskill, E. et al. (2013) ‘NSA files decoded: Edward Snowden’s surveillance revelations explained’, The Guardian. available at: https://www.theguardian.com/world/interactive/2013/nov/01/snowden-nsa-files-surveillance-revelations-decoded.
Mahieu, R. L. P. and Ausloos, J. (2020) Harnessing the collective potential of GDPR access rights : towards an ecology of transparency, Internet Policy Review. available at: https://policyreview.info/articles/news/harnessing-collective-potential-gdpr-access-rights-towards-ecology-transparency/1487.
Mahieu, R. and Ausloos, J. (2020) Recognising and Enabling the Collective Dimension of the GDPR and the Right of Access.
Maldre, M. (2012) ‘Amazon makes Kindle less social’. available at: https://www.mattmaldre.com/2012/10/01/amazon-makes-kindle-less-social/.
Marshall, C. (2019) ‘What is named entity recognition (NER) and how can I use it?’, Medium. available at: https://medium.com/mysuperai/what-is-named-entity-recognition-ner-and-how-can-i-use-it-2b68cf6f545d.
Martin, M. (2022) ‘The trustworthy, governable platform: A concept of safe social spaces in the public network’. in prep.
McAdams, D. P. (1996) ‘Personality, modernity, and the storied self: A contemporary framework for studying persons’, Psychological inquiry. Taylor & Francis, 7(4), pp. 295–321.
Meschtscherjakov, A., Wilfinger, D. and Tscheligi, M. (2014) Mobile attachment- Causes and consequences for emotional bonding with mobile phones, Conference on Human Factors in Computing Systems - Proceedings, pp. 2317–2326. doi: 10.1145/2556288.2557295.
Miagkov, A., Gillula, J. and Cyphers, B. (2019) ‘Google’s plans for chrome extensions won’t really help security’, Electronic Frontier Foundation. available at: https://www.eff.org/deeplinks/2019/07/googles-plans-chrome-extensions-wont-really-help-security.
Miller, A. (2021) ‘The right to repair movement, explained – BMC software | blogs’, The Business IT Blog. available at: https://www.bmc.com/blogs/right-to-repair/.
Moore-Colyer, R. (2022) ‘Apple launches iPhone self service repair kits — but there’s a big catch | tom’s guide’, Tom’s Guide. available at: https://www.tomsguide.com/news/apple-launches-iphone-self-service-repair-kits-but-theres-a-big-catch.
Neff, G. (2013) Why Big Data Won’t Cure Us, Big Data, 1(3), pp. 117–123. doi: 10.1089/big.2013.0029.
Owen, L. H. (2012) ‘Betaworks’ Findings shifts to web clipping, as Amazon bans Kindle clips’, GigaOm. available at: https://web.archive.org/web/20120922222936/http://gigaom.com/2012/09/18/betaworks-findings-pivots-as-amazon-bans-kindle-clips/.
P., R. (2021) ‘Web scraping extensions: The easy gateway to web data’, Medium. available at: https://raluca-p.medium.com/web-scraping-extensions-the-easy-gateway-to-web-data-40e8592e13bf.
Pidoux, J. et al. (2022) Digipower technical reports: Understanding influence and power in the data economy. doi: 10.5281/zenodo.6554155.
Reber, M. and Duffy, A. (2005) ‘Value centred design: Understanding the nature of value’, International Conference on Engineering Design.
Schneider, H. et al. (2018) Empowerment in HCI - A survey and framework, in Conference on human factors in computing systems - proceedings. Association for Computing Machinery. doi: 10.1145/3173574.3173818.
Schrems, M. (2017) ‘Noyb.eu - our detailed concept’, noyb.eu. available at: https://noyb.eu/en/our-detailed-concept.
‘ShapeRepo: Make your apps interoperable’ (2022). available at: https://shaperepo.com/.
Slater, C. L. (2003) ‘Generativity versus stagnation: An elaboration of erikson’s adult stage of human development’, Journal of Adult development. Springer Nature BV, 10(1), pp. 53–65.
Solon, O. (2012) ‘How much data did Facebook have on one man? 1,200 pages of data in 57 categories’, Wired. available at: https://www.wired.co.uk/article/privacy-versus-facebook.
Storni, C. (2014) The problem of De-sign as conjuring: Empowerment-in-use and the politics of seams, Proceedings of the 13th Participatory Design Conference on Research Papers - PDC ’14. New York, New York, USA: ACM Press, pp. 161–170. doi: 10.1145/2661435.2661436.
Teevan, J. B. (2001) Displaying dynamic information, in Conference on human factors in computing systems - proceedings, pp. 417–418. doi: 10.1145/634067.634311.
Tett, G. (2022) ‘“Right to repair” campaign forces rethink by big tech’, Financial Times. available at: https://www.ft.com/content/aabe2aee-cd2b-42e4-9a0b-51f838da89db.
Tufecki, Z. (2019) ‘We are tenants on our own devices’, Wired. available at: https://www.wired.com/story/right-to-repair-tenants-on-our-own-devices/.
Various Authors (2022) ‘ARIA accessibility tags’, Mozilla Developer Network. available at: https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA.
Verborgh, R. (2017) ‘Paradigm shifts for the decentralized web’. available at: https://ruben.verborgh.org/blog/2017/12/20/paradigm-shifts-for-the-decentralized-web/.
Watzman, N. (2021) ‘The political ads facebook won’t show you’, Medium. available at: https://medium.com/cybersecurity-for-democracy/the-political-ads-facebook-wont-show-you-e0d6181bca25.
Weiser, M. (1991) The computer for the 21st century, Scientific American, 265(3), pp. 94–105. doi: 10.1145/329124.329126.
Weiser, M. (1994) ‘Creating the invisible interface: (Invited talk)’, in Proceedings of the 7th annual ACM symposium on user interface software and technology, p. 1.
Weiser, M. and Brown, J. S. (1997) ‘The coming age of calm technology’, in Beyond calculation. Springer, pp. 75–85.
‘What is text mining and content analytics?’ (2022). OpenText. available at: https://www.opentext.com/products-and-solutions/products/ai-and-analytics/opentext-magellan/magellan-text-mining.
‘Worker info exchange’ (2022). available at: https://www.workerinfoexchange.org/.